
    Model Selection: Two Fundamental Measures of Coherence and Their Algorithmic Significance

    The problem of model selection arises in a number of contexts, such as compressed sensing, subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper generalizes the notion of \emph{incoherence} in the existing literature on model selection and introduces two fundamental measures of coherence among the columns of a design matrix, termed the worst-case coherence and the average coherence. In particular, it uses these two measures of coherence to provide an in-depth analysis of a simple one-step thresholding (OST) algorithm for model selection. One of the key insights offered by the ensuing analysis is that OST is feasible for model selection as long as the design matrix obeys an easily verifiable property. The paper also characterizes the model-selection performance of OST in terms of the worst-case coherence, \mu, and establishes that OST performs near-optimally in the low signal-to-noise ratio regime for N x C design matrices with \mu = O(N^{-1/2}). Finally, in contrast to some of the existing literature on model selection, the analysis in the paper is nonasymptotic, does not require knowledge of the true model order, applies to generic (random or deterministic) design matrices, and neither requires submatrices of the design matrix to have full rank nor assumes a statistical prior on the values of the nonzero entries of the data vector.
    Comment: 5 pages; accepted for Proc. 2010 IEEE International Symposium on Information Theory (ISIT 2010)
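The two coherence measures and the OST rule described above are simple enough to sketch directly. The following is a minimal illustration, not the paper's exact algorithm: the function names are ours, and the threshold in `ost` is left as a free parameter rather than the coherence-dependent constant derived in the paper.

```python
import numpy as np

def coherences(X):
    """Worst-case and average coherence of a design matrix (columns normalized)."""
    X = X / np.linalg.norm(X, axis=0)             # unit-norm columns
    G = X.T @ X                                   # Gram matrix
    C = G.shape[0]
    off = G - np.eye(C)                           # zero out the diagonal
    mu = np.abs(off).max()                        # worst-case coherence
    nu = np.abs(off.sum(axis=1)).max() / (C - 1)  # average coherence
    return mu, nu

def ost(X, y, threshold):
    """One-step thresholding: keep columns whose correlation with y exceeds a threshold."""
    X = X / np.linalg.norm(X, axis=0)
    return np.flatnonzero(np.abs(X.T @ y) > threshold)
```

For an orthonormal design both coherences vanish, and OST reduces to simple coefficient thresholding.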

    Conditioning of Random Block Subdictionaries with Applications to Block-Sparse Recovery and Regression

    The linear model, in which a set of observations is assumed to be given by a linear combination of columns of a matrix, has long been the mainstay of the statistics and signal processing literature. One particular challenge for inference under linear models is understanding the conditions on the dictionary under which reliable inference is possible. This challenge has attracted renewed attention in recent years since many modern inference problems deal with the "underdetermined" setting, in which the number of observations is much smaller than the number of columns in the dictionary. This paper makes several contributions to this setting when the set of observations is given by a linear combination of a small number of groups of columns of the dictionary, termed the "block-sparse" case. First, it specifies conditions on the dictionary under which most block subdictionaries are well conditioned. This result is fundamentally different from prior work on block-sparse inference because (i) it provides conditions that can be explicitly computed in polynomial time, (ii) the given conditions translate into near-optimal scaling of the number of columns of the block subdictionaries as a function of the number of observations for a large class of dictionaries, and (iii) it suggests that the spectral norm and the quadratic-mean block coherence of the dictionary (rather than the worst-case coherences) fundamentally limit the scaling of dimensions of the well-conditioned block subdictionaries. Second, this paper investigates the problems of block-sparse recovery and block-sparse regression in underdetermined settings. Near-optimal block-sparse recovery and regression are possible for certain dictionaries as long as the dictionary satisfies easily computable conditions and the coefficients describing the linear combination of groups of columns can be modeled through a mild statistical prior.
    Comment: 39 pages, 3 figures. A revised and expanded version of the paper published in IEEE Transactions on Information Theory (DOI: 10.1109/TIT.2015.2429632); this revision includes corrections in the proofs of some of the results
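The block-level coherence quantities mentioned above can be illustrated on a dictionary whose columns are partitioned into equal-size blocks. The definitions below are a paraphrase, not the paper's exact normalizations: block coherence between blocks i and j is taken here as the spectral norm of X_i^T X_j, and the quadratic-mean version is the root mean square of these norms over off-diagonal block pairs.

```python
import numpy as np

def block_gram_norms(X, block_size):
    """Spectral norms ||X_i^T X_j||_2 between the column blocks of a dictionary."""
    n_blocks = X.shape[1] // block_size
    blocks = [X[:, k * block_size:(k + 1) * block_size] for k in range(n_blocks)]
    M = np.zeros((n_blocks, n_blocks))
    for i in range(n_blocks):
        for j in range(n_blocks):
            M[i, j] = np.linalg.norm(blocks[i].T @ blocks[j], 2)
    return M

def block_coherence_stats(X, block_size):
    """Worst-case and quadratic-mean block coherence (one possible formalization)."""
    M = block_gram_norms(X, block_size)
    n = M.shape[0]
    off = M[~np.eye(n, dtype=bool)]         # off-diagonal block pairs only
    worst = off.max()                        # worst-case block coherence
    quad_mean = np.sqrt((off ** 2).mean())   # quadratic-mean block coherence
    return worst, quad_mean
```

For an orthonormal dictionary partitioned into blocks, both quantities are zero, since distinct blocks span orthogonal subspaces.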

    Frame Coherence and Sparse Signal Processing

    The sparse signal processing literature often uses random sensing matrices to obtain performance guarantees. Unfortunately, in the real world, sensing matrices do not always come from random processes. It is therefore desirable to evaluate whether an arbitrary matrix, or frame, is suitable for sensing sparse signals. To this end, the present paper investigates two parameters that measure the coherence of a frame: worst-case and average coherence. We first provide several examples of frames that have small spectral norm, worst-case coherence, and average coherence. Next, we present a new lower bound on worst-case coherence and compare it to the Welch bound. We then propose an algorithm that decreases the average coherence of a frame without changing its spectral norm or worst-case coherence. Finally, we use worst-case and average coherence, as opposed to the Restricted Isometry Property, to garner near-optimal probabilistic guarantees on both sparse signal detection and reconstruction in the presence of noise. This contrasts with recent results that only guarantee noiseless signal recovery from arbitrary frames, and which further assume independence across the nonzero entries of the signal---in a sense, requiring small average coherence replaces the need for such an assumption.
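For reference, the Welch bound mentioned above lower-bounds the worst-case coherence of any C unit-norm vectors in R^N with C > N by sqrt((C - N) / (N (C - 1))). A short check of this against a random frame (function names are ours):

```python
import numpy as np

def welch_bound(N, C):
    """Lower bound on worst-case coherence for C unit-norm vectors in R^N (C > N)."""
    return np.sqrt((C - N) / (N * (C - 1)))

def worst_case_coherence(F):
    """Largest absolute inner product between distinct normalized columns of F."""
    F = F / np.linalg.norm(F, axis=0)
    G = np.abs(F.T @ F) - np.eye(F.shape[1])  # remove unit diagonal
    return G.max()
```

Any frame, random or deterministic, must have worst-case coherence at least the Welch bound; equiangular tight frames are exactly the frames that meet it with equality.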

    Logic Integer Programming Models for Signaling Networks

    We propose a static and a dynamic approach to modeling biological signaling networks, and show how each can be used to answer relevant biological questions. For this we use two different mathematical tools: Propositional Logic and Integer Programming. The power of discrete mathematics for handling qualitative as well as quantitative data has so far not been exploited in Molecular Biology, which is mostly driven by experimental research relying on first-order or statistical models. The resulting logic statements and integer programs are analyzed and can be solved with standard software. For a restricted class of problems, the logic models reduce to a satisfiability problem solvable in polynomial time. Additionally, the more dynamic model enables enumeration of possible time resolutions in poly-logarithmic time. Computational experiments are included.
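A toy example of the logic-to-integer-programming translation described above (the rule is ours, not taken from the paper): the activation rule y = x1 AND x2 over binary variables is captured by the linear constraints y <= x1, y <= x2, y >= x1 + x2 - 1, whose feasible 0/1 points are exactly the truth table of AND.

```python
from itertools import product

def and_constraints_hold(x1, x2, y):
    """Linear-constraint encoding of y = x1 AND x2 over binary variables."""
    return y <= x1 and y <= x2 and y >= x1 + x2 - 1

# Enumerate feasible binary points of the constraints...
feasible = {(x1, x2, y) for x1, x2, y in product((0, 1), repeat=3)
            if and_constraints_hold(x1, x2, y)}

# ...and compare against the truth table of AND.
truth_table = {(x1, x2, x1 & x2) for x1, x2 in product((0, 1), repeat=2)}
```

The same pattern (one inequality per implication, plus a reverse bound) encodes OR and NOT gates, so any propositional rule in a signaling network can be compiled into an integer program solvable by standard software.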

    Long Term Trends in Resource Exergy Consumption and Useful Work Supplies in the UK, 1900-2000

    Our aim is to explain historical economic growth in the UK economy by introducing an empirical measure for useful work derived from natural resource energy inputs into an augmented production function. To do this, we estimate the long-term (1900-2000) trends in resource exergy supply and conversion to useful work in the United Kingdom. The exergy resources considered include domestic consumption of coal, crude oil and petroleum products, natural gas, nuclear and renewable resources (including biomass). All flows of exergy were allocated to an end use such as providing heat, light, transport, human and animal work and electrical power. For each end use we estimated a time-dependent efficiency of conversion from exergy to useful work. The 3-factor production function (of capital, labour and useful work) is able to reproduce the historic trajectory of economic growth without recourse to any exogenous assumptions of technological progress or total factor productivity. The results indicate that useful work derived from natural resource exergy is an important factor of production.
    Keywords: exergy, energy, efficiency, economic growth, United Kingdom
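As a purely illustrative sketch of a 3-factor production function of the kind mentioned above (the paper's actual functional form and estimated parameters may differ; the exponents below are made up), a constant-returns Cobb-Douglas form in capital K, labour L, and useful work U:

```python
def three_factor_output(K, L, U, alpha=0.3, beta=0.2):
    """Toy constant-returns production function Y = K^alpha * L^beta * U^gamma."""
    gamma = 1.0 - alpha - beta  # exponents sum to one: constant returns to scale
    return K ** alpha * L ** beta * U ** gamma
```

Because the exponents sum to one, doubling all three inputs doubles output; the empirical question in the paper is whether adding useful work U as a factor removes the need for an exogenous technological-progress term.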

    Real-time three-dimensional ultrasound : a valuable new tool in preoperative assessment of complex congenital cardiac disease

    Evaluating complex cardiac defects in small children preoperatively requires multiple diagnostic procedures, including echocardiography and also invasive methods such as cardiac catheterisation, computed tomography and magnetic resonance imaging. This article assesses the complex anatomy of the atrioventricular valves in atrioventricular septal defect using bedside real-time three-dimensional echocardiography and compares these results to the anatomic findings at the time of operative intervention.

    Long-range interactions between an atom in its ground S state and an open-shell linear molecule

    A theory of long-range interactions between an atom in its ground S state and a linear molecule in a degenerate state with a non-zero projection of the electronic orbital angular momentum is presented. It is shown how the long-range coefficients can be related to the first- and second-order molecular properties. The expressions for the long-range coefficients are written in terms of all components of the static and dynamic multipole polarizability tensor, including the nondiagonal terms connecting states with opposite projections of the electronic orbital angular momentum. It is also shown that for the interactions of molecules in excited states that are connected to the ground state by multipolar transition moments, additional terms appear in the long-range induction energy. All these theoretical developments are illustrated with numerical results for systems of interest for sympathetic cooling experiments: interactions of the ground-state Rb(2^2S) atom with CO(^3\Pi), OH(^2\Pi), NH(^1\Delta), and CH(^2\Pi), and of the ground-state Li(2^2S) atom with CH(^2\Pi).
    Comment: 30 pages, 3 figures
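As a hedged numerical illustration of relating a long-range coefficient to dynamic polarizabilities (a textbook special case, not the paper's open-shell formalism): the standard Casimir-Polder relation C6 = (3/pi) * integral over omega of alpha_A(i omega) * alpha_B(i omega), evaluated here for single-pole (London) model polarizabilities with made-up parameters. For this model the exact answer is (3/2) alpha_A alpha_B omega_A omega_B / (omega_A + omega_B).

```python
import numpy as np

def london_alpha(omega, alpha0, omega0):
    """Single-pole model dynamic polarizability at imaginary frequency i*omega."""
    return alpha0 * omega0 ** 2 / (omega0 ** 2 + omega ** 2)

def c6(alpha0_a, omega_a, alpha0_b, omega_b, wmax=200.0, n=200_001):
    """Casimir-Polder C6 via trapezoidal quadrature over imaginary frequency."""
    w = np.linspace(0.0, wmax, n)
    f = london_alpha(w, alpha0_a, omega_a) * london_alpha(w, alpha0_b, omega_b)
    h = w[1] - w[0]
    integral = h * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)  # trapezoid rule
    return 3.0 / np.pi * integral
```

With alpha0 = omega0 = 1 for both species, the London closed form gives C6 = 3/2 * 1/2 = 0.75, which the quadrature reproduces; the integrand decays as omega^-4, so truncating at wmax incurs a negligible tail error.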